Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, achieving up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
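As a concrete illustration of this challenge setup (not any particular team's submission), the sketch below builds a deliberately tiny 3X super-resolution network in Keras and converts it to a fully integer INT8 TFLite model with post-training quantization, the kind of artifact evaluated on the VS680 NPU. The network layout, input crop size, and random representative dataset are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

def build_tiny_sr_model(scale=3, channels=16):
    # A deliberately small SR network: a few convolutions + depth_to_space upsampling.
    inp = tf.keras.Input(shape=(360, 640, 3))
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(3 * scale * scale, 3, padding="same")(x)
    out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)  # 360x640 -> 1080x1920
    return tf.keras.Model(inp, out)

def representative_data():
    # Stand-in for low-resolution DIV2K crops used to calibrate the quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 360, 640, 3).astype(np.float32)]

model = build_tiny_sr_model()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
with open("sr_int8.tflite", "wb") as f:
    f.write(converter.convert())
```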
In the past, image retrieval was the mainstream solution for cross-view geo-localization and UAV visual localization tasks. In short, image retrieval obtains the final desired information, such as GPS coordinates, through an intermediate retrieval step. However, image retrieval is not a fully end-to-end paradigm, and it involves redundant operations, such as preparing a feature gallery in advance and the sampling-interval problem in gallery construction, which make large-scale applications difficult. In this paper, we propose an end-to-end localization scheme, Finding Point with Image (FPI), which aims to directly locate the target position from an image of source A (the drone view). To verify the feasibility of our framework, we construct a new dataset (UL14) designed for the UAV visual self-localization task. Meanwhile, we also build a transformer-based baseline to enable end-to-end training. In addition, previous evaluation methods are no longer applicable to the FPI framework; therefore, meter-level accuracy (MA) and relative distance score (RDS) are proposed to evaluate the accuracy of UAV localization. We also provide a preliminary comparison between FPI and image-retrieval methods, and the FPI structure improves performance in both speed and efficiency. In particular, the FPI task remains a great challenge due to the large differences between views and the drastic spatial-scale transformations.
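A minimal sketch of the meter-level accuracy idea mentioned above, assuming predictions and ground truth are pixel coordinates on the satellite-view image and that a ground sample distance (meters per pixel) is known; the thresholds are illustrative, and the RDS formula defined in the paper is not reproduced here.

```python
import numpy as np

def meter_level_accuracy(pred_xy, gt_xy, meters_per_pixel, thresholds=(5, 10, 20)):
    """Fraction of localizations whose error falls within k meters of the ground truth.
    pred_xy, gt_xy: (N, 2) pixel coordinates on the satellite-view image (assumed layout)."""
    errors_m = np.linalg.norm((pred_xy - gt_xy) * meters_per_pixel, axis=1)
    return {f"MA@{k}m": float((errors_m <= k).mean()) for k in thresholds}

# Hypothetical usage with random predictions.
pred = np.random.rand(100, 2) * 1000
gt = pred + np.random.randn(100, 2) * 20
print(meter_level_accuracy(pred, gt, meters_per_pixel=0.3))
```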
In this work, the SVEIDR model and its variants (the aged and vaccination models) are introduced to encode the influence of social contact across different age groups and vaccination statuses. We then implement physics-informed neural networks on both simulated and real-world data. The paper presents results of the transmission and prediction analysis of COVID-19 learned from the neural networks.
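To make the physics-informed setup concrete, here is a minimal PINN sketch in PyTorch for a compartmental model of this family. For brevity it uses simplified SEIR-style dynamics with made-up rate constants as a stand-in; the paper's SVEIDR equations with age- and vaccination-stratified contact terms would replace the residuals below, and a data-fitting loss on reported cases would be added.

```python
import torch
import torch.nn as nn

class CompartmentNet(nn.Module):
    """Maps time t to compartment fractions; Softplus keeps outputs non-negative."""
    def __init__(self, n_comp=4, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_comp), nn.Softplus())

    def forward(self, t):
        return self.net(t)

def physics_residual(model, t, beta=0.3, sigma=0.2, gamma=0.1):
    """ODE residual loss evaluated at collocation times t of shape (N, 1)."""
    t = t.clone().requires_grad_(True)
    y = model(t)
    S, E, I, R = (y[:, i:i + 1] for i in range(4))
    dS, dE, dI, dR = (torch.autograd.grad(y[:, i].sum(), t, create_graph=True)[0]
                      for i in range(4))
    res = [dS + beta * S * I,              # dS/dt = -beta * S * I
           dE - beta * S * I + sigma * E,  # dE/dt =  beta * S * I - sigma * E
           dI - sigma * E + gamma * I,     # dI/dt =  sigma * E - gamma * I
           dR - gamma * I]                 # dR/dt =  gamma * I
    return sum((r ** 2).mean() for r in res)

model = CompartmentNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    t = torch.rand(256, 1) * 100.0         # collocation points over a 100-day horizon
    loss = physics_residual(model, t)      # + data loss on observed case counts in practice
    opt.zero_grad(); loss.backward(); opt.step()
```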
Multiview detection uses multiple calibrated cameras with overlapping fields of view to locate occluded pedestrians. In this field, existing methods typically adopt a "human modeling - aggregation" strategy. To obtain robust pedestrian representations, some methods intuitively use the locations of detected 2D bounding boxes, while others use entire frame features projected onto the ground plane. However, the former does not consider human appearance and leads to many ambiguities, while the latter suffers from projection errors due to the lack of accurate heights for the human torso and head. In this paper, we propose a new pedestrian representation scheme based on human point cloud modeling. Specifically, using ray tracing for holistic human depth estimation, we model pedestrians as upright, thin cardboard point clouds. We then aggregate the point clouds of pedestrian cardboards from multiple views for the final decision. Compared with existing representations, the proposed method explicitly exploits human appearance and greatly reduces projection errors through relatively accurate height estimation. The proposed method achieves highly competitive results on two standard evaluation benchmarks.
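The following sketch illustrates only the "upright, thin cardboard" representation and its aggregation onto a bird's-eye-view grid, under assumed dimensions and grid resolution; the paper's ray-tracing-based depth and height estimation from image features is not reproduced here.

```python
import numpy as np

def cardboard_points(foot_xy, height=1.8, width=0.5, n_h=18, n_w=5):
    """Model a pedestrian as an upright, thin cardboard of 3D points (Z up, meters)."""
    xs = foot_xy[0] + np.linspace(-width / 2, width / 2, n_w)
    zs = np.linspace(0.0, height, n_h)
    X, Z = np.meshgrid(xs, zs)
    Y = np.full_like(X, foot_xy[1])
    return np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)   # (n_h * n_w, 3)

def aggregate_to_bev(point_clouds, grid_hw=(120, 360), cell=0.1):
    """Accumulate pedestrian cardboards from all views onto a ground-plane occupancy grid."""
    grid = np.zeros(grid_hw)
    for pts in point_clouds:
        ix = np.clip((pts[:, 0] / cell).astype(int), 0, grid_hw[0] - 1)
        iy = np.clip((pts[:, 1] / cell).astype(int), 0, grid_hw[1] - 1)
        np.add.at(grid, (ix, iy), 1.0)
    return grid

# Two views observing the same pedestrian at roughly (3.0 m, 7.5 m) on the ground plane.
bev = aggregate_to_bev([cardboard_points((3.0, 7.5)), cardboard_points((3.05, 7.45))])
```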
Multiview detection incorporates multiple camera views to alleviate occlusion in crowded scenes, and state-of-the-art methods adopt separate 2D transformations to project multiview features onto the ground plane. However, we find that these 2D transformations do not take the height of objects into account, and because of this neglect, features along the vertical direction of the same object may not be projected onto the same ground-plane point, resulting in impure ground-plane features. To solve this problem, we propose VFA, voxelized 3D feature aggregation, for feature transformation and aggregation in multiview detection. Specifically, we voxelize the 3D space, project the voxels onto each camera view, and associate 2D features with these projected voxels. This allows us to identify and then aggregate 2D features along the same vertical line, largely alleviating projection distortion. In addition, since different kinds of objects (humans vs. cattle) have different shapes on the ground plane, we introduce oriented Gaussian encoding to match such shapes, thereby improving accuracy and efficiency. We conduct experiments on both multiview 2D detection and multiview 3D detection problems. Results on four datasets (including the newly introduced MultiviewC dataset) show that our system is highly competitive with state-of-the-art methods. Code and MultiviewC are released at https://github.com/robert-mar/vfa.
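A minimal PyTorch sketch of the voxel-projection-and-sampling step described above: voxel centers are projected into one camera view with a 3x4 projection matrix, 2D features are bilinearly sampled at the projected locations, and features along the same vertical (height) axis are then pooled into a bird's-eye-view map. Masking of voxels that project outside the image or behind the camera, the multi-view fusion, and the oriented Gaussian encoding are omitted; the tensor layouts are assumptions, not the repository's actual interface.

```python
import torch
import torch.nn.functional as F

def voxel_feature_aggregation(feat, P, xs, ys, zs, img_hw):
    """feat: (C, H, W) feature map of one view; P: (3, 4) camera projection matrix;
    xs, ys, zs: 1D tensors of voxel-center world coordinates; img_hw: (H_img, W_img)."""
    X, Y, Z = torch.meshgrid(xs, ys, zs, indexing="ij")                # (Nx, Ny, Nz)
    pts = torch.stack([X, Y, Z, torch.ones_like(X)], dim=-1).reshape(-1, 4)
    uvw = pts @ P.T                                                    # project voxel centers
    uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)                       # pixel coordinates (u, v)
    H_img, W_img = img_hw
    grid = torch.stack([uv[:, 0] / (W_img - 1),                        # normalize to [-1, 1]
                        uv[:, 1] / (H_img - 1)], dim=-1) * 2 - 1
    sampled = F.grid_sample(feat[None], grid.view(1, 1, -1, 2), align_corners=True)
    vox = sampled.reshape(feat.shape[0], len(xs), len(ys), len(zs))    # (C, Nx, Ny, Nz)
    return vox.mean(dim=-1)                                            # pool along height -> BEV (C, Nx, Ny)
```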
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first use a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: the annotated target segment generally refers to two specific frames as its start and end timestamps. The video downsampling process may lose these two frames and take adjacent, irrelevant frames as the new boundaries. 2) Reasoning-bias: such incorrect new boundary frames also lead to a reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames that enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationships among these frames and to generate soft labels on the boundaries for more accurate frame-query reasoning. This mechanism also supplements the sampled sparse frames with the missing consecutive visual semantics for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
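The boundary-bias issue is easy to see numerically: with a fixed sampling budget, the annotated start and end frames are usually not among the sampled indices, so the effective boundaries shift to adjacent, possibly irrelevant frames. The numbers below are made up purely for illustration.

```python
import numpy as np

def uniform_sample(num_frames, num_samples=128):
    """Fixed-budget sparse sampling commonly used before frame-query interaction."""
    return np.linspace(0, num_frames - 1, num_samples).round().astype(int)

video_len, gt_start, gt_end = 1840, 437, 905        # hypothetical untrimmed video and GT segment
idx = uniform_sample(video_len)
print(gt_start in idx, gt_end in idx)               # often both False: the true boundary frames are dropped
shift_start = np.abs(idx - gt_start).min()
shift_end = np.abs(idx - gt_end).min()
print("boundary shift (frames):", int(shift_start), int(shift_end))
```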
Robust prediction of citywide traffic flows at different time periods plays a crucial role in intelligent transportation systems. While previous work has made great efforts to model spatio-temporal correlations, existing methods still suffer from two key limitations: i) Most models collectively predict all regions' flows without accounting for spatial heterogeneity, i.e., different regions may have skewed traffic flow distributions. ii) These models fail to capture the temporal heterogeneity induced by time-varying traffic patterns, as they typically model temporal correlations with a shared parameterized space for all time periods. To tackle these challenges, we propose a novel Spatio-Temporal Self-Supervised Learning (ST-SSL) traffic prediction framework which enhances the traffic pattern representations to be reflective of both spatial and temporal heterogeneity, with auxiliary self-supervised learning paradigms. Specifically, our ST-SSL is built over an integrated module with temporal and spatial convolutions for encoding the information across space and time. To achieve the adaptive spatio-temporal self-supervised learning, our ST-SSL first performs the adaptive augmentation over the traffic flow graph data at both attribute- and structure-levels. On top of the augmented traffic graph, two SSL auxiliary tasks are constructed to supplement the main traffic prediction task with spatial and temporal heterogeneity-aware augmentation. Experiments on four benchmark datasets demonstrate that ST-SSL consistently outperforms various state-of-the-art baselines. Since spatio-temporal heterogeneity widely exists in practical datasets, the proposed framework may also cast light on other spatial-temporal applications. Model implementation is available at https://github.com/Echo-Ji/ST-SSL.
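As a rough sketch of the two augmentation levels mentioned above (random rather than adaptive/learned, which is what ST-SSL actually performs), the snippet below masks a fraction of traffic-flow attributes and drops a fraction of edges in the region graph; the tensor shapes are assumptions.

```python
import torch

def augment_traffic_graph(x, adj, attr_mask_p=0.1, edge_drop_p=0.1):
    """x: (N, T, F) traffic features over N regions, T time steps, F flow attributes;
    adj: (N, N) symmetric region adjacency matrix."""
    # Attribute-level: randomly zero out flow values so the SSL task must infer them from context.
    x_aug = x * (torch.rand_like(x) > attr_mask_p).float()
    # Structure-level: randomly drop existing edges while keeping the matrix symmetric.
    mask = (torch.rand_like(adj) > edge_drop_p).float()
    mask = torch.minimum(mask, mask.T)
    adj_aug = adj * mask
    return x_aug, adj_aug

# Hypothetical usage with random data for 64 regions, 12 time steps, 2 flow attributes.
x_aug, adj_aug = augment_traffic_graph(torch.rand(64, 12, 2), (torch.rand(64, 64) > 0.8).float())
```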
Despite the remarkable progress of image captioning, existing captioners typically lack the controllable capability to generate desired image captions, e.g., describing the image in a rough or detailed manner, from a factual or emotional view, etc. In this paper, we show that a unified model can perform well across diverse domains and freely switch among multiple styles. Such a controllable capability is achieved by embedding prompt learning into the image captioning framework. To be specific, we design a set of prompts to fine-tune the pre-trained image captioner. These prompts allow the model to absorb stylized data from different domains for joint training, without performance degradation in each domain. Furthermore, we optimize the prompts with learnable vectors in the continuous word embedding space, avoiding heuristic prompt engineering while exhibiting superior performance. In the inference stage, our model is able to generate desired stylized captions by choosing the corresponding prompts. Extensive experiments verify the controllable capability of the proposed method. Notably, we achieve outstanding performance on two diverse image captioning benchmarks, the COCO Karpathy split and TextCaps, using a single unified model.
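A minimal sketch of the prompt-learning idea: a bank of learnable continuous prompt vectors per style is prepended to the caption token embeddings before they enter the captioner. The style names, prompt length, and embedding size are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class StylePrompts(nn.Module):
    def __init__(self, styles=("factual", "romantic", "humorous"), n_tokens=8, d_model=512):
        super().__init__()
        # One bank of learnable continuous prompt vectors per caption style.
        self.prompts = nn.ParameterDict(
            {s: nn.Parameter(torch.randn(n_tokens, d_model) * 0.02) for s in styles})

    def forward(self, token_emb, style):
        # token_emb: (B, L, d_model) word embeddings of the decoder input.
        p = self.prompts[style].unsqueeze(0).expand(token_emb.size(0), -1, -1)
        return torch.cat([p, token_emb], dim=1)   # prompt tokens prepended, then fed to the captioner

prompts = StylePrompts()
emb = torch.randn(4, 20, 512)
print(prompts(emb, "romantic").shape)             # torch.Size([4, 28, 512])
```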
Motivation: Enhancers are important cis-regulatory elements that regulate a wide range of biological functions and enhance the transcription of target genes. Although many state-of-the-art computational methods have been proposed to efficiently identify enhancers, learning globally contextual features remains one of the challenges for computational methods. Given the similarities between biological sequences and natural language sentences, BERT-based language techniques have been applied to extracting complex contextual features in various computational biology tasks such as protein function/structure prediction. To speed up research on enhancer identification, it is urgent to construct a BERT-based enhancer language model. Results: In this paper, we propose a multi-scale enhancer identification method (iEnhancer-ELM) based on enhancer language models, which treat enhancer sequences as natural language sentences composed of k-mer nucleotides. iEnhancer-ELM can extract contextual information of multi-scale k-mers with positions from raw enhancer sequences. Benefiting from the complementary information of k-mers at multiple scales, we ensemble four iEnhancer-ELM models to improve enhancer identification. Benchmark comparisons show that our model outperforms state-of-the-art methods. Through the interpretable attention mechanism, we find 30 biological patterns, of which 40% (12/30) are verified by a widely used motif tool (STREME) and a popular dataset (JASPAR), demonstrating that our model has the potential to reveal the biological mechanisms of enhancers. Availability: The source code is available at https://github.com/chen-bioinfo/iEnhancer-ELM Contact: junjiechen@hit.edu.cn and junjie.chen.hit@gmail.com; Supplementary information: Supplementary data are available at Bioinformatics online.
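To show what "treating enhancer sequences as sentences of k-mer nucleotides" looks like in practice, here is a small tokenization sketch; the specific k values mirror the multi-scale idea but are an assumption, not necessarily the scales ensembled by iEnhancer-ELM.

```python
def kmer_tokenize(seq, k=3, stride=1):
    """Split a DNA sequence into overlapping k-mer 'words' for a BERT-style enhancer language model."""
    seq = seq.upper()
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

# Multi-scale views of the same (toy) enhancer fragment; each scale feeds its own model,
# and the models are ensembled for the final prediction.
fragment = "ATGCGTACGTTAGCCA"
for k in (3, 4, 5, 6):   # example scales
    print(k, kmer_tokenize(fragment, k))
```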
A diffusion model learns to predict a vector field of gradients. We propose to apply the chain rule to the learned gradients and back-propagate the score of a diffusion model through the Jacobian of a differentiable renderer, which we instantiate to be a voxel radiance field. This setup aggregates 2D scores at multiple camera viewpoints into a 3D score and repurposes a pretrained 2D model for 3D data generation. We identify a technical challenge of distribution mismatch that arises in this application and propose a novel estimation mechanism to resolve it. We run our algorithm on several off-the-shelf diffusion image generative models, including the recently released Stable Diffusion trained on the large-scale LAION dataset.
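Schematically, the chain-rule aggregation can be sketched as below; `render` and `score_2d` are assumed stand-in interfaces for a differentiable voxel radiance field renderer and a pretrained 2D diffusion score model, not real library calls, and the distribution-mismatch correction proposed in the paper is not shown.

```python
import torch

def aggregate_3d_score(voxel_params, cameras, render, score_2d, sigma):
    """Back-propagate 2D scores through the renderer's Jacobian and sum over viewpoints.
    voxel_params: leaf tensor with requires_grad=True holding the voxel radiance field."""
    if voxel_params.grad is not None:
        voxel_params.grad.zero_()
    for cam in cameras:
        img = render(voxel_params, cam)          # differentiable rendering at this viewpoint
        with torch.no_grad():
            s = score_2d(img, sigma)             # 2D score from the pretrained diffusion model
        # The gradient of <s, img> w.r.t. the voxels is J^T s, i.e. the 2D score
        # pulled back through the renderer's Jacobian; it accumulates across views.
        (img * s).sum().backward()
    return voxel_params.grad                     # aggregated 3D score direction
```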